The Fast Convergence of Boosting
Author
Abstract
This manuscript considers the convergence rate of boosting under a large class of losses, including the exponential and logistic losses, where the best previous rate of convergence was O(exp(1/ε²)). First, it is established that the setting of weak learnability aids the entire class, granting a rate O(ln(1/ε)). Next, the (disjoint) conditions under which the infimal empirical risk is attainable are characterized in terms of the sample and the weak learning class, and a new proof is given for the known rate O(ln(1/ε)). Finally, it is established that any instance can be decomposed into two smaller instances resembling the two preceding special cases, yielding a rate O(1/ε), with a matching lower bound for the logistic loss. The principal technical hurdle throughout this work is the potential unattainability of the infimal empirical risk; the technique for overcoming this barrier may be of general interest.
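The procedure analyzed here admits a compact illustration: boosting over a fixed finite class of weak hypotheses is coordinate descent on the empirical risk. The sketch below runs greedy coordinate descent on the exponential loss over a synthetic margin matrix and reports the empirical risk as it approaches its infimum; the matrix `A`, the number of rounds, and the toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def exp_loss(margins):
    """Empirical exponential loss of the current combined hypothesis."""
    return np.mean(np.exp(-margins))

def boost(A, rounds=200):
    """Greedy coordinate descent (boosting) on the exponential loss.

    A[i, j] = y_i * h_j(x_i): margin of weak hypothesis j on example i.
    Returns the weights over weak hypotheses and the risk after each round.
    """
    n, m = A.shape
    lam = np.zeros(m)
    risks = []
    for _ in range(rounds):
        margins = A @ lam
        w = np.exp(-margins)
        w /= w.sum()                       # distribution over examples
        edges = w @ A                      # weighted edge of each weak hypothesis
        j = int(np.argmax(np.abs(edges)))  # steepest coordinate
        gamma = edges[j]
        if abs(gamma) >= 1.0:              # degenerate: perfect weak hypothesis
            gamma = np.sign(gamma) * (1.0 - 1e-12)
        # exact line search when A[:, j] has entries in {-1, +1}
        step = 0.5 * np.log((1.0 + gamma) / (1.0 - gamma))
        lam[j] += step
        risks.append(exp_loss(A @ lam))
    return lam, risks

# toy instance: 200 examples, 50 {-1,+1}-valued weak hypotheses
rng = np.random.default_rng(0)
A = rng.choice([-1.0, 1.0], size=(200, 50))
lam, risks = boost(A)
print("risk after 1, 50, 200 rounds:", risks[0], risks[49], risks[-1])
```

On a random instance like this the infimal risk is typically not attained at any finite weight vector, which is exactly the difficulty the abstract highlights; the recorded risks only approach it.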
Similar Articles
A Hybrid Framework for Building an Efficient Incremental Intrusion Detection System
In this paper, a boosting-based incremental hybrid intrusion detection system is introduced. The system combines incremental misuse detection with incremental anomaly detection. We use a boosting ensemble of weak classifiers to implement the misuse detection component; for incremental misuse detection, it can identify new types of intrusions that do not appear in the training dataset. As...
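As a hedged sketch of the boosting-ensemble component only (not the incremental hybrid scheme described above), the snippet below trains an AdaBoost ensemble of decision stumps on synthetic "attack" vs. "normal" records; it assumes a recent scikit-learn (the `estimator` parameter), and the data-generating rule is invented for illustration.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))                 # stand-in for traffic features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # 1 = attack, 0 = normal (synthetic rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),  # weak classifier: a decision stump
    n_estimators=100,
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```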
Fast Training of Effective Multi-class Boosting Using Coordinate Descent Optimization
We present a novel column-generation-based boosting method for multi-class classification. Our multi-class boosting is formulated as a single optimization problem, as in [1, 2]. Unlike most existing multi-class boosting methods, which use the same set of weak learners for all classes, we train class-specific weak learners (i.e., each class has its own set of weak learners). We s...
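The column-generation formulation itself is not reproduced here; the following sketch only illustrates the notion of class-specific weak learners by giving each class its own boosted set of stumps through a one-vs-rest reduction, with synthetic data and scikit-learn (again, a recent version) standing in for the paper's method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1500, n_features=20, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# one boosted ensemble (one set of weak learners) per class
per_class = {}
for c in np.unique(y):
    clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                             n_estimators=50)
    per_class[c] = clf.fit(X, (y == c).astype(int))

# predict with the class whose own ensemble is most confident
classes = sorted(per_class)
scores = np.column_stack([per_class[c].decision_function(X) for c in classes])
pred = np.array(classes)[scores.argmax(axis=1)]
print("training accuracy:", (pred == y).mean())
```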
Boosting Based on a Smooth Margin
We study two boosting algorithms, Coordinate Ascent Boosting and Approximate Coordinate Ascent Boosting, which are explicitly designed to produce maximum margins. To derive these algorithms, we introduce a smooth approximation of the margin that one can maximize in order to produce a maximum margin classifier. Our first algorithm is simply coordinate ascent on this function, involving a line se...
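A minimal sketch of the smooth-margin idea follows: a soft-minimum surrogate of the normalized margin that coordinate-wise updates can increase. The matrix `M`, the fixed step size, and the crude greedy update are assumptions for illustration; the paper's algorithms use a line search rather than a fixed bump.

```python
import numpy as np

def smooth_margin(M, lam):
    """-log(sum_i exp(-(M lam)_i)) / ||lam||_1: a soft-min surrogate that
    lower-bounds the minimum normalized margin."""
    margins = M @ lam
    return -np.log(np.sum(np.exp(-margins))) / np.sum(lam)

def min_margin(M, lam):
    return np.min(M @ lam) / np.sum(lam)

rng = np.random.default_rng(1)
M = rng.choice([-1.0, 1.0], size=(100, 20), p=[0.3, 0.7])   # M[i, j] = y_i * h_j(x_i)
lam = np.ones(20)

# crude coordinate ascent: bump the single coordinate that most improves
# the smooth margin, stopping when no coordinate helps
for _ in range(300):
    base = smooth_margin(M, lam)
    gains = np.empty(M.shape[1])
    for j in range(M.shape[1]):
        trial = lam.copy()
        trial[j] += 0.1
        gains[j] = smooth_margin(M, trial) - base
    j = int(np.argmax(gains))
    if gains[j] <= 0:
        break
    lam[j] += 0.1

print("smooth margin:", smooth_margin(M, lam), "<= min margin:", min_margin(M, lam))
```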
Harmonics Estimation in Power Systems using a Fast Hybrid Algorithm
In this paper, a novel hybrid algorithm for harmonics estimation in power systems is proposed. Estimating the harmonic components is a nonlinear problem because the phases of the sinusoids enter the distorted waveform nonlinearly. Most researchers have used nonlinear methods to extract the harmonic parameters; however, nonlinear methods for amplitude estimation increase the convergence time. Hen...
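This is not the paper's hybrid algorithm, but a sketch of the linear piece of the problem it builds on: once the fundamental frequency is treated as known, harmonic amplitudes and phases follow from ordinary least squares on a sine/cosine basis. The waveform, sampling rate, and harmonic orders below are invented for illustration.

```python
import numpy as np

f0, fs, orders = 50.0, 3200.0, [1, 3, 5, 7]   # fundamental (Hz), sampling rate (Hz), harmonic orders
t = np.arange(0, 0.2, 1.0 / fs)

# synthetic distorted waveform: known amplitudes/phases plus noise
true_amp = {1: 1.00, 3: 0.30, 5: 0.15, 7: 0.08}
true_phase = {1: 0.2, 3: -0.5, 5: 1.0, 7: 0.7}
x = sum(true_amp[h] * np.sin(2 * np.pi * h * f0 * t + true_phase[h]) for h in orders)
x += 0.02 * np.random.default_rng(0).normal(size=t.size)

# design matrix of sines and cosines for each harmonic order
cols = []
for h in orders:
    cols.append(np.sin(2 * np.pi * h * f0 * t))
    cols.append(np.cos(2 * np.pi * h * f0 * t))
Phi = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(Phi, x, rcond=None)
for k, h in enumerate(orders):
    a, b = coef[2 * k], coef[2 * k + 1]       # x ≈ a*sin + b*cos = A*sin(wt + phi)
    print(f"h={h}: amplitude≈{np.hypot(a, b):.3f}, phase≈{np.arctan2(b, a):.3f} rad")
```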
High-Dimensional L2Boosting: Rate of Convergence
Boosting is one of the most significant developments in machine learning. This paper studies the rate of convergence of L2Boosting, which is tailored for regression, in a high-dimensional setting. Moreover, we introduce so-called “post-Boosting”. This is a post-selection estimator which applies ordinary least squares to the variables selected in the first stage by L2Boosting. Another variant is...
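A minimal sketch of componentwise L2Boosting followed by the post-Boosting refit described above, i.e., ordinary least squares restricted to the variables selected in the boosting stage. The shrinkage step, number of rounds, and synthetic high-dimensional design are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, s = 200, 500, 5                        # high-dimensional: p >> n, s true signals
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:s] = [3, -2, 1.5, 1, -1]
y = X @ beta_true + rng.normal(size=n)

X_std = (X - X.mean(0)) / X.std(0)           # standardize so coefficients are comparable
nu, rounds = 0.1, 300                        # shrinkage step and number of boosting rounds
beta = np.zeros(p)
resid = y - y.mean()

for _ in range(rounds):
    # componentwise least squares: fit the residual with the single best column
    corr = X_std.T @ resid / n
    j = int(np.argmax(np.abs(corr)))
    beta[j] += nu * corr[j]
    resid -= nu * corr[j] * X_std[:, j]

selected = np.flatnonzero(beta)
# post-Boosting: OLS restricted to the selected columns removes the shrinkage bias
ols_coef, *_ = np.linalg.lstsq(X_std[:, selected], y - y.mean(), rcond=None)
print("selected columns:", selected[:10], "... total selected:", selected.size)
```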
Ranking and Scoring Using Empirical Risk Minimization
A general model is proposed for studying ranking problems. We investigate learning methods based on empirical minimization of the natural estimates of the ranking risk. The empirical estimates are of the form of a U-statistic. Inequalities from the theory of U-statistics and U-processes are used to obtain performance bounds for the empirical risk minimizers. Convex risk minimization methods a...
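The U-statistic form of the empirical ranking risk can be written down directly: the fraction of example pairs whose scores are ordered inconsistently with their labels. In the sketch below, the scoring function is an arbitrary stand-in, not a minimizer produced by any particular method, and the data are synthetic.

```python
import numpy as np
from itertools import combinations

def empirical_ranking_risk(scores, labels):
    """Average over unordered pairs of 1{(s_i - s_j)(y_i - y_j) < 0}: an order-2
    U-statistic estimating the probability of misranking a random pair."""
    pairs = combinations(range(len(labels)), 2)
    errs = [(scores[i] - scores[j]) * (labels[i] - labels[j]) < 0 for i, j in pairs]
    return np.mean(errs)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=300) > 0).astype(int)   # binary relevance labels
score = X @ np.array([1.0, 0.0, 0.0, 0.0, 0.0])              # stand-in scoring function

print("empirical ranking risk:", empirical_ranking_risk(score, y))
```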